Walmart Weekly Sales Forecasting: Model Comparison

Author

Machine Learning UTEC Homework

1 Objective

This report compares six forecasting approaches for weekly Walmart sales:

  1. Local Linear Anomaly Model (Phase 1).
  2. Bayesian Structural AR (Phase 2, full data).
  3. Elastic Net anomaly baseline (Phase 3).
  4. Random Forest anomaly baseline (Phase 4).
  5. ETS anomaly baseline (Phase 5).
  6. AdaBoost anomaly baseline (Phase 7).

Additionally, we include a Phase 1-lite diagnostic variant in the final comparison to show the impact of removing the extra exogenous variables from Local Linear.

Evaluation uses forward-chaining cross-validation (4 folds, each with a 13-week validation horizon) and weighted MAE (WMAE).

2 Data and Evaluation Setup

All seven out-of-fold (OOF) artifacts cover the same 154386 rows, so every model is scored on an identical validation set.

Artifact       OOF rows
Phase 1        154386
Phase 1-lite   154386
Phase 2        154386
Elastic Net    154386
Random Forest  154386
ETS            154386
AdaBoost       154386

Validation metric:

\[ \text{WMAE} = \frac{\sum_i w_i \lvert y_i - \hat{y}_i \rvert}{\sum_i w_i}, \quad w_i = \begin{cases} 5, & \text{if holiday} \\ 1, & \text{otherwise} \end{cases} \]
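
For concreteness, here is a minimal NumPy sketch of this metric; the function name and signature are illustrative, not the project's code.

import numpy as np

def wmae(y_true, y_pred, is_holiday):
    # Holiday weeks get weight 5, all other weeks weight 1.
    w = np.where(np.asarray(is_holiday, dtype=bool), 5.0, 1.0)
    abs_err = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    return float((w * abs_err).sum() / w.sum())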

3 Model 1: Local Linear Anomaly (Phase 1)

Local Linear is a weighted regression that learns a different local coefficient vector for each week-of-year neighborhood and each series. It is simple and interpretable, but sensitive to feature scaling, so standardization is applied before fitting.

3.1 Mathematical Formulation

For series \(s\) and target seasonal week \(w\):

\[ \hat{\beta}_{s,w} = \arg\min_{\beta} \sum_{i \in s} K_h\!\left(d(\text{week}_i,w)\right) \left(y_i - \beta_0 - x_i^\top \beta\right)^2 + \lambda \|\beta\|_2^2 \]

and recursive prediction:

\[ \hat{y}^{\text{anom}}_{t,s} = \hat{\beta}_0 + x_{t,s}^\top \hat{\beta}, \qquad \hat{y}_{t,s} = \hat{y}^{\text{anom}}_{t,s} + \text{clim}_{t,s} \]

3.2 Plain-Language Meaning of Terms

  • series s: one specific Store + Dept time series.
  • week_i and w: historical week index and target seasonal week where local fitting is centered.
  • K_h(d(.)): a kernel weight that gives more importance to rows that are close in the seasonal calendar (e.g., nearby weeks of the year).
  • x_i: the input features for a row (lags, holiday flag, exogenous variables).
  • beta_0, beta: intercept and coefficients learned by local weighted regression.
  • lambda: regularization strength that shrinks coefficients to avoid unstable fits.
  • y^{anom}: anomaly target (real sales minus climatology baseline).
  • clim_{t,s}: baseline seasonal level added back to convert anomaly prediction to sales prediction.
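
To make the local fit concrete, the sketch below solves one weighted ridge problem with a tricube kernel over circular week-of-year distance, following the formulation above. The helper names, the 52-week calendar, and leaving the intercept unpenalized are assumptions for illustration, not the project's exact code.

import numpy as np

def tricube(d, bandwidth):
    # Tricube kernel: weight 1 at distance 0, fading to 0 at the bandwidth edge.
    u = np.clip(np.abs(d) / bandwidth, 0.0, 1.0)
    return (1.0 - u**3) ** 3

def fit_local_ridge(X, y, weeks, target_week, bandwidth=6, ridge=1e-4):
    # Circular distance on a 52-week seasonal calendar.
    d = np.abs(weeks - target_week)
    d = np.minimum(d, 52 - d)
    w = tricube(d, bandwidth)
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    A = Xb.T @ (Xb * w[:, None]) + ridge * np.eye(Xb.shape[1])
    A[0, 0] -= ridge                             # leave the intercept unpenalized
    beta = np.linalg.solve(A, Xb.T @ (w * y))
    return beta[0], beta[1:]                     # beta_0 and coefficient vector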

3.3 Workflow

flowchart LR
  A[Read train/test parquet] --> B[Compute climatology per series/store/week]
  B --> C[Create anomalies and lag1/lag2]
  C --> D[Impute missing lag/exogenous values]
  D --> E[Standard-scale Phase 1 features]
  E --> F[Forward-chaining CV]
  F --> G[Fit local weighted ridge by series and seasonal week]
  G --> H[Recursive rollout on validation horizon]
  H --> I[OOF metrics and predictions]
  I --> J[Fit full train and forecast test]

3.4 Tuned Parameters and Effect

Parameter        Value    How it works
kernel           tricube  Kernel shape over seasonal distance; controls how quickly weights decay with distance.
bandwidth        6        Neighborhood width in week-of-year units; larger values smooth across more weeks.
min_samples      16       Minimum active samples to fit a local model; guards against unstable sparse fits.
ridge            0.0001   L2 regularization strength in the weighted regression.
coef_clip        6.0      Post-fit coefficient clipping to prevent extreme values.
anom_clip_scale  2.0      Prediction clipping band around anomaly quantiles.
lags             [1, 2]   Autoregressive inputs used in recursive forecasting.
standard_scaler  True     Feature standardization before fitting local regressions.
{'kernel': 'tricube',
 'bandwidth': 6,
 'min_samples': 16,
 'ridge': 0.0001,
 'coef_clip': 6.0,
 'anom_clip_scale': 2.0,
 'lags': [1, 2],
 'use_interactions': False,
 'interaction_cols': [],
 'feature_cols': ['temp_anom',
  'fuel_anom',
  'MarkDown1',
  'MarkDown2',
  'MarkDown3',
  'MarkDown4',
  'MarkDown5',
  'CPI',
  'Unemployment',
  'sales_anom_lag1',
  'sales_anom_lag2',
  'is_holiday_int'],
 'standard_scaler': True,
 'n_folds': 4,
 'val_weeks': 13,
 'max_series': None}
fold  train_start  train_end   val_start   val_end     wmae          mae           rmse
1     2010-02-05   2011-10-28  2011-11-04  2012-01-27  5989.774614   5916.085624   11465.394577
2     2010-02-05   2012-01-27  2012-02-03  2012-04-27  10061.944470  10472.387681  20457.695819
3     2010-02-05   2012-04-27  2012-05-04  2012-07-27  11956.737489  11956.737489  21420.299578
4     2010-02-05   2012-07-27  2012-08-03  2012-10-26  12294.668865  12479.604773  22343.880598
Mean                                                   10075.781360  10206.203892  18921.817643

4 Model 1-lite: Local Linear Diagnostic Variant

Phase 1-lite keeps the same Local Linear method but removes the extra exogenous variables (MarkDown1-MarkDown5, CPI, Unemployment), keeping only temp_anom and fuel_anom alongside the lag and holiday features.

{'kernel': 'tricube',
 'bandwidth': 6,
 'min_samples': 16,
 'ridge': 0.0001,
 'coef_clip': 6.0,
 'anom_clip_scale': 2.0,
 'lags': [1, 2],
 'feature_mode': 'lite',
 'exogenous_features': ['temp_anom', 'fuel_anom'],
 'use_interactions': False,
 'interaction_cols': [],
 'feature_cols': ['temp_anom',
  'fuel_anom',
  'sales_anom_lag1',
  'sales_anom_lag2',
  'is_holiday_int'],
 'standard_scaler': True,
 'n_folds': 4,
 'val_weeks': 13,
 'max_series': None}
fold  train_start  train_end   val_start   val_end     wmae         mae          rmse
1     2010-02-05   2011-10-28  2011-11-04  2012-01-27  2468.567458  2271.802436  4955.291247
2     2010-02-05   2012-01-27  2012-02-03  2012-04-27  2199.682469  2196.184633  4485.720535
3     2010-02-05   2012-04-27  2012-05-04  2012-07-27  2028.753667  2028.753667  3948.262807
4     2010-02-05   2012-07-27  2012-08-03  2012-10-26  1947.742271  1923.235682  3954.874629
Mean                                                   2161.186467  2104.994105  4336.037305

5 Model 2: Bayesian Structural AR (Phase 2)

Structural AR combines hierarchical intercepts, autoregressive lags, exogenous drivers, and seasonal Fourier terms in one probabilistic model. It is designed to capture both shared structure and series-specific behavior.

5.1 Mathematical Formulation

In z-score space:

\[ y^{*}_{t,s} \sim \mathcal{N}(\mu_{t,s}, \sigma) \]

\[ \mu_{t,s} = \alpha_s + \beta_{\text{lag}}^\top \text{lag}_{t,s} + \beta_{\text{exog}}^\top \text{exog}_{t,s} + \beta_h h_{t,s} + \beta_{\text{tr}} \text{trend}_t + \beta_f^\top \text{fourier}_t \]

with hierarchical intercept:

\[ \alpha_s = \alpha_\mu + \alpha_\sigma z_s,\qquad z_s \sim \mathcal{N}(0,1) \]

5.2 Plain-Language Meaning of Terms

  • y*_{t,s}: normalized anomaly target for week t and series s.
  • mu_{t,s}: model mean prediction before adding random noise.
  • alpha_s: series-specific baseline level; each Store + Dept has its own intercept.
  • beta_lag, lag_{t,s}: coefficients and lag features that capture autoregressive behavior.
  • beta_exog, exog_{t,s}: effects of external drivers (temperature, markdown flags, CPI, unemployment, etc.).
  • beta_h h_{t,s}: holiday contribution.
  • beta_tr trend_t: long-run drift over time.
  • beta_f fourier_t: smooth yearly seasonal pattern via sine/cosine terms.
  • sigma: predictive noise level around the mean.
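
A minimal PyMC sketch of this structural mean is shown below. It assumes prepared arrays y_star, series_idx, lags (n x 2), exog, holiday, trend, and fourier (n x 6 for fourier_order = 3), plus an integer n_series; all names are illustrative rather than the project's exact model code.

import pymc as pm

with pm.Model() as model:
    # Hierarchical (non-centered) series intercept: alpha_s = alpha_mu + alpha_sd * z_s.
    alpha_mu = pm.Normal("alpha_mu", 0.0, 1.0)
    alpha_sd = pm.HalfNormal("alpha_sd", 1.0)
    z = pm.Normal("z", 0.0, 1.0, shape=n_series)
    alpha = alpha_mu + alpha_sd * z

    # Structural coefficients (prior scales are assumptions for the sketch).
    beta_lag = pm.Normal("beta_lag", 0.0, 0.5, shape=lags.shape[1])
    beta_exog = pm.Normal("beta_exog", 0.0, 0.5, shape=exog.shape[1])
    beta_h = pm.Normal("beta_h", 0.0, 0.5)
    beta_tr = pm.Normal("beta_tr", 0.0, 0.5)
    beta_f = pm.Normal("beta_f", 0.0, 0.5, shape=fourier.shape[1])

    mu = (alpha[series_idx]
          + lags @ beta_lag
          + exog @ beta_exog
          + beta_h * holiday
          + beta_tr * trend
          + fourier @ beta_f)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y_star)

    map_estimate = pm.find_MAP(maxeval=1800)  # MAP fit, matching max_eval above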

5.3 Workflow

flowchart LR
  A[Read train parquet] --> B[Compute climatology]
  B --> C[Create anomaly target and lag1/lag2]
  C --> D[Build trend and Fourier seasonal terms]
  D --> E[Fill exogenous NA from train medians]
  E --> F[Standardize lag/exogenous/trend variables]
  F --> G[Fit PyMC structural model via MAP]
  G --> H[Recursive probabilistic rollout by date]
  H --> I[Aggregate draws to mean/sd and intervals]
  I --> J[CV metrics and OOF outputs]

5.4 Tuned Parameters and Effect

Parameter           Value   How it works
max_eval            1800    Maximum evaluations in MAP optimization; higher allows tighter convergence at more runtime.
pred_draws          40      Number of stochastic recursive draws for prediction.
fourier_order       3       Number of sine/cosine seasonal harmonics.
lag_orders          [1, 2]  Autoregressive lags used as structural predictors.
sigma_clusters      0       Number of heteroskedastic noise clusters; 0 means a single shared noise level.
exogenous_features  temp_anom, fuel_anom, MarkDown1-MarkDown5, CPI, Unemployment  External covariates entering the structural mean.
{'n_folds': 4,
 'val_weeks': 13,
 'max_eval': 1800,
 'random_seed': 8927,
 'pred_draws': 40,
 'fourier_order': 3,
 'lag_orders': [1, 2],
 'exogenous_features': ['temp_anom',
  'fuel_anom',
  'MarkDown1',
  'MarkDown2',
  'MarkDown3',
  'MarkDown4',
  'MarkDown5',
  'CPI',
  'Unemployment'],
 'sigma_clusters': 0,
 'max_series': 3331,
 'note': 'Structural AR model with hierarchical series intercept, trend, Fourier seasonality, and recursive validation.'}
fold  train_start  train_end   val_start   val_end     wmae         mae          rmse         runtime_sec
1     2010-02-05   2011-10-28  2011-11-04  2012-01-27  2514.047613  2354.646140  4935.317829  64.718200
2     2010-02-05   2012-01-27  2012-02-03  2012-04-27  2192.877850  2207.494598  4400.426532  68.368072
3     2010-02-05   2012-04-27  2012-05-04  2012-07-27  2138.504577  2138.504577  3799.486079  68.047948
4     2010-02-05   2012-07-27  2012-08-03  2012-10-26  1904.502883  1885.828366  3620.883291  65.612177
Mean                                                   2187.483231  2146.618420  4189.028433  66.686599

6 Model 3: Elastic Net Anomaly Baseline (Phase 3)

Elastic Net is a linear model with combined L1/L2 regularization. It is a robust baseline for correlated tabular features and keeps an interpretable global linear structure.

6.1 Mathematical Formulation

\[ \hat{\beta} = \arg\min_{\beta} \frac{1}{2n}\|y - X\beta\|_2^2 + \alpha\left( \frac{1-l1\_ratio}{2}\|\beta\|_2^2 + l1\_ratio\|\beta\|_1 \right) \]

6.2 Plain-Language Meaning of Terms

  • X beta: linear combination of all features.
  • ||y - X beta||^2: fit error term (how far predictions are from truth).
  • alpha: total regularization strength.
  • l1_ratio: balance between the two penalties:
      – L1 penalty: pushes less-useful coefficients to exactly zero (feature selection behavior).
      – L2 penalty: smoothly shrinks coefficients for stability under correlated features.
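
As a concrete reference, a minimal scikit-learn sketch with the tuned values from Section 6.4 might look as follows; X_train, y_train, and X_val are assumed to hold the anomaly feature matrix and target described in the workflow.

from sklearn.impute import SimpleImputer
from sklearn.linear_model import ElasticNet
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Median imputation and standard scaling mirror the workflow in Section 6.3.
model = make_pipeline(
    SimpleImputer(strategy="median"),
    StandardScaler(),
    ElasticNet(alpha=0.02, l1_ratio=0.2, max_iter=10000),
)
model.fit(X_train, y_train)
y_anom_pred = model.predict(X_val)  # anomaly scale; add climatology back for sales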

6.3 Workflow

flowchart LR
  A[Read train/test parquet] --> B[Compute climatology and anomalies]
  B --> C[Create lag1/lag2 and calendar features]
  C --> D[Build exogenous feature matrix]
  D --> E[Median imputation + standard scaling]
  E --> F[Fit Elastic Net on train folds]
  F --> G[Recursive forecasting on validation horizon]
  G --> H[OOF metrics and predictions]
  H --> I[Fit full train and forecast test]

6.4 Tuned Parameters and Effect

Parameter  Value   How it works
alpha      0.02    Overall regularization strength; larger values shrink coefficients harder.
l1_ratio   0.2     Mix between L1 (sparsity) and L2 (stability).
max_iter   10000   Maximum coordinate-descent iterations allowed for convergence.
lags       [1, 2]  Lag features used in the recursive setup.
features   lag1, lag2, week_of_year, month, is_holiday_int, temp_anom, fuel_anom, MarkDown1-MarkDown5, CPI, Unemployment  Full tabular interface for model fitting.
{'model': 'elastic_net',
 'target': 'sales_anom',
 'features': ['lag1',
  'lag2',
  'week_of_year',
  'month',
  'is_holiday_int',
  'temp_anom',
  'fuel_anom',
  'MarkDown1',
  'MarkDown2',
  'MarkDown3',
  'MarkDown4',
  'MarkDown5',
  'CPI',
  'Unemployment'],
 'lags': [1, 2],
 'alpha': 0.02,
 'l1_ratio': 0.2,
 'max_iter': 10000,
 'n_folds': 4,
 'val_weeks': 13,
 'max_series': None,
 'train_path': 'train_feat.parquet',
 'test_path': 'test_feat.parquet'}
fold  train_start  train_end   val_start   val_end     wmae         mae          rmse         runtime_sec
1     2010-02-05   2011-10-28  2011-11-04  2012-01-27  2472.678568  2292.791482  4919.584363  6.225060
2     2010-02-05   2012-01-27  2012-02-03  2012-04-27  2163.772249  2175.479837  4420.761374  8.105888
3     2010-02-05   2012-04-27  2012-05-04  2012-07-27  2058.784472  2058.784472  3793.832714  7.717091
4     2010-02-05   2012-07-27  2012-08-03  2012-10-26  1833.856224  1803.861985  3645.865425  12.398858
Mean                                                   2132.272878  2082.729444  4195.010969  8.611724

7 Model 4: Random Forest Anomaly Baseline (Phase 4)

Random Forest is an ensemble of decision trees trained on bootstrap samples. It captures nonlinear interactions with little feature engineering and is robust to mixed feature scales.

7.1 Mathematical Formulation

For \(B\) trees:

\[ \hat{y}(x)=\frac{1}{B}\sum_{b=1}^{B} T_b(x) \]

where each \(T_b\) is grown on a bootstrap sample and split candidates are randomized via max_features.

7.2 Plain-Language Meaning of Terms

  • T_b(x): prediction from tree b.
  • B: number of trees in the forest.
  • Final prediction is an average of trees, which reduces variance and improves robustness.
  • Bootstrap sampling means each tree sees a slightly different sampled training set.
  • Random split-feature selection (max_features) decorrelates trees, improving ensemble quality.
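
The sketch below pairs a forest fit (with the tuned values from Section 7.4) with one recursive rollout for a single series; the array names, lag column positions, and forecast-origin lags are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(
    n_estimators=120, max_depth=18,
    min_samples_leaf=2, max_features="sqrt",
    n_jobs=-1, random_state=0,
)
rf.fit(X_train, y_train)

# Recursive multi-step rollout for one series: each predicted anomaly
# becomes the next step's lag1, since future actuals are unknown.
lag1, lag2 = last_two_anomalies        # anomalies observed at the forecast origin
preds = []
for row in future_rows:                # feature rows with empty lag slots
    x_t = np.asarray(row, dtype=float)
    x_t[0], x_t[1] = lag1, lag2        # assumes lag1/lag2 sit in columns 0 and 1
    y_hat = rf.predict(x_t.reshape(1, -1))[0]
    preds.append(y_hat)
    lag1, lag2 = y_hat, lag1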

7.3 Workflow

flowchart LR
  A[Read train/test parquet] --> B[Compute climatology and anomalies]
  B --> C[Create lag1/lag2 + calendar + exogenous features]
  C --> D[Median imputation]
  D --> E[Train Random Forest on fold train]
  E --> F[Recursive multi-step rollout on fold validation]
  F --> G[OOF metrics and predictions]
  G --> H[Refit on full train and forecast test]

7.4 Tuned Parameters and Effect

Parameter         Value   How it works
n_estimators      120     Number of trees; more trees reduce variance but increase runtime.
max_depth         18      Maximum tree depth; controls model complexity.
min_samples_leaf  2       Minimum samples per leaf; regularizes tree partitions.
max_features      sqrt    Feature subset size considered at each split.
lags              [1, 2]  Autoregressive lag features used recursively.
{'model': 'random_forest',
 'target': 'sales_anom',
 'features': ['lag1',
  'lag2',
  'week_of_year',
  'month',
  'is_holiday_int',
  'temp_anom',
  'fuel_anom',
  'MarkDown1',
  'MarkDown2',
  'MarkDown3',
  'MarkDown4',
  'MarkDown5',
  'CPI',
  'Unemployment'],
 'lags': [1, 2],
 'n_estimators': 120,
 'max_depth': 18,
 'min_samples_leaf': 2,
 'max_features': 'sqrt',
 'n_folds': 4,
 'val_weeks': 13,
 'max_series': None,
 'train_path': 'train_feat.parquet',
 'test_path': 'test_feat.parquet'}
fold  train_start  train_end   val_start   val_end     wmae         mae          rmse         runtime_sec
1     2010-02-05   2011-10-28  2011-11-04  2012-01-27  2443.770022  2243.909149  4898.711983  14.831643
2     2010-02-05   2012-01-27  2012-02-03  2012-04-27  2132.907277  2148.134375  4393.405264  20.781198
3     2010-02-05   2012-04-27  2012-05-04  2012-07-27  1915.343992  1915.343992  3675.397679  19.938103
4     2010-02-05   2012-07-27  2012-08-03  2012-10-26  1738.982368  1719.540629  3554.458799  19.019421
Mean                                                   2057.750915  2006.732036  4130.493432  18.642591

8 Model 5: ETS Anomaly Baseline (Phase 5)

ETS here is used on residual anomalies after removing an exogenous linear component. This hybrid keeps classical exponential smoothing dynamics while allowing external regressors to explain systematic variation.

8.1 Mathematical Formulation

Exogenous residual decomposition:

\[ y^{anom}_{t,s}=g(x_{t,s}) + r_{t,s},\qquad g(\cdot)\text{ from Ridge regression} \]

Residual component \(r_{t,s}\) is modeled by ETS:

\[ r_{t,s} = \ell_{t-1,s}+b_{t-1,s}+s_{t-m,s}+\varepsilon_{t,s} \]

and final forecast:

\[ \hat{y}_{t,s} = \text{clim}_{t,s} + \hat{g}(x_{t,s}) + \hat{r}_{t,s} \]

8.2 Plain-Language Explanation (What ETS Is)

ETS stands for Error, Trend, Seasonality. It is a classical time-series model that keeps three internal states and updates them week by week:

  • level: the current baseline sales level for the series.
  • trend: the direction/slope (increasing or decreasing behavior).
  • seasonality: repeating seasonal pattern (here a 52-week yearly cycle).

In this project we first predict the anomaly using exogenous variables \(g(x)\), then ETS models the remaining residual pattern \(r\). The final prediction is:

  • exogenous part
  • plus ETS residual dynamics
  • plus climatology baseline to return to sales scale.

8.3 Plain-Language Meaning of Terms

  • y^{anom}: anomaly sales target.
  • g(x): exogenous linear predictor (Ridge) from external variables.
  • r_{t,s}: residual anomaly after removing exogenous part.
  • ell: level state.
  • b: trend state.
  • s: seasonal state.
  • epsilon: random error term.
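
A minimal single-series sketch of this hybrid, using scikit-learn's Ridge and statsmodels' ExponentialSmoothing, is given below. The inputs exog_train, exog_val, anom, and clim_val are assumed arrays, and the level-only configuration reflects the dominant fitted mode in the mode counts reported in Section 8.5.

import numpy as np
from sklearn.linear_model import Ridge
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Exogenous linear component g(x), then ETS on the leftover residual r.
g = Ridge(alpha=1.0).fit(exog_train, anom)
resid = anom - g.predict(exog_train)

# Level-only ETS (the most common mode in this run); a seasonal fit
# would pass seasonal="add", seasonal_periods=52 instead.
ets = ExponentialSmoothing(resid, trend=None, seasonal=None).fit()
r_hat = ets.forecast(13)               # 13-week validation horizon

# Final sales-scale forecast: climatology + exogenous part + ETS residual.
y_hat = clim_val + g.predict(exog_val) + np.asarray(r_hat)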

8.4 Workflow

flowchart LR
  A[Read train/test parquet] --> B[Compute climatology and anomalies]
  B --> C[Fit exogenous Ridge on anomaly target]
  C --> D[Compute residual anomaly = target - exogenous prediction]
  D --> E[Fit per-series ETS on residuals with fallback candidates]
  E --> F[Recursive fold forecasting by series]
  F --> G[Add exogenous component + climatology back]
  G --> H[OOF metrics and test forecasts]

8.5 Tuned Parameters and Effect

Parameter           Value   How it works
seasonal_periods    52      Season length used by the ETS seasonal state.
exog_alpha          1.0     Ridge penalty for the exogenous linear component.
exogenous_features  temp_anom, fuel_anom, MarkDown1-MarkDown5, CPI, Unemployment  Regressors used before ETS residual modeling.
n_folds             4       Forward-chaining fold count.
val_weeks           13      Validation horizon length per fold.
{'model': 'ets',
 'target': 'sales_anom (via exogenous + ETS residual)',
 'seasonal_periods': 52,
 'exogenous_features': ['temp_anom',
  'fuel_anom',
  'MarkDown1',
  'MarkDown2',
  'MarkDown3',
  'MarkDown4',
  'MarkDown5',
  'CPI',
  'Unemployment'],
 'exog_alpha': 1.0,
 'n_folds': 4,
 'val_weeks': 13,
 'max_series': None,
 'cv_mode_counts': {'level': 10599,
  'level-trend': 1796,
  'global-fallback': 77,
  'last-value-fallback': 51,
  'level-seasonal': 4},
 'test_mode_counts': {'level': 2758,
  'level-trend': 372,
  'level-seasonal': 8,
  'global-fallback': 11,
  'last-value-fallback': 20},
 'train_path': 'train_feat.parquet',
 'test_path': 'test_feat.parquet'}
fold  train_start  train_end   val_start   val_end     wmae         mae          rmse         runtime_sec
1     2010-02-05   2011-10-28  2011-11-04  2012-01-27  2350.860561  2193.524621  4865.080883  137.155356
2     2010-02-05   2012-01-27  2012-02-03  2012-04-27  1965.952051  1969.586155  4035.608919  839.189260
3     2010-02-05   2012-04-27  2012-05-04  2012-07-27  1585.766265  1585.766265  3052.148316  952.329147
4     2010-02-05   2012-07-27  2012-08-03  2012-10-26  1436.435312  1435.277361  2961.666930  974.137173
Mean                                                   1834.753547  1796.038600  3728.626262  725.702734

9 Model 6: AdaBoost Anomaly Baseline (Phase 7)

AdaBoost builds an additive ensemble of shallow trees, where each stage focuses more on difficult residual patterns from previous stages.

9.1 Mathematical Formulation

With base learners \(h_m\):

\[ F_M(x)=\sum_{m=1}^{M}\nu_m h_m(x) \]

where the stage weights depend on the chosen boosting loss and learning rate. In practice here, each \(h_m\) is a depth-limited regression tree.

9.2 Plain-Language Meaning of Terms

  • h_m(x): base learner at stage m (small regression tree).
  • nu_m: effective contribution of stage m to the final prediction.
  • M: number of boosting stages.
  • learning_rate: shrinkage factor that controls how aggressively each stage updates the model.
  • Boosting logic: each new tree focuses more on errors made by previous trees.
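
A minimal scikit-learn sketch with the tuned values from Section 9.4 follows; X_train, y_train, and X_val are assumed inputs. Note that the base-learner keyword is estimator in scikit-learn >= 1.2 (base_estimator in older versions).

from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

ada = AdaBoostRegressor(
    estimator=DecisionTreeRegressor(max_depth=3),  # depth-limited base trees
    n_estimators=300,
    learning_rate=0.03,
    loss="linear",
    random_state=0,
)
ada.fit(X_train, y_train)
y_anom_pred = ada.predict(X_val)  # anomaly scale; add climatology back for sales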

9.3 Workflow

flowchart LR
  A[Read train/test parquet] --> B[Compute climatology and anomaly target]
  B --> C[Create lag1/lag2 + calendar + exogenous features]
  C --> D[Median imputation]
  D --> E[Train AdaBoost regressor on fold train]
  E --> F[Recursive validation rollout]
  F --> G[OOF metrics and predictions]
  G --> H[Refit on full train and produce test forecasts]

9.4 Tuned Parameters and Effect

Parameter      Value   How it works
n_estimators   300     Number of boosting stages.
learning_rate  0.03    Shrinkage per stage; smaller values need more estimators.
max_depth      3       Depth of each base regression tree.
loss           linear  Boosting loss that controls how hard examples are reweighted.
lags           [1, 2]  Autoregressive lag inputs for recursive forecasting.
{'model': 'adaboost',
 'target': 'sales_anom',
 'features': ['lag1',
  'lag2',
  'week_of_year',
  'month',
  'is_holiday_int',
  'temp_anom',
  'fuel_anom',
  'MarkDown1',
  'MarkDown2',
  'MarkDown3',
  'MarkDown4',
  'MarkDown5',
  'CPI',
  'Unemployment'],
 'lags': [1, 2],
 'n_estimators': 300,
 'learning_rate': 0.03,
 'max_depth': 3,
 'loss': 'linear',
 'n_folds': 4,
 'val_weeks': 13,
 'max_series': None,
 'train_path': 'train_feat.parquet',
 'test_path': 'test_feat.parquet'}
fold  train_start  train_end   val_start   val_end     wmae         mae          rmse         runtime_sec
1     2010-02-05   2011-10-28  2011-11-04  2012-01-27  2479.961992  2278.508843  4905.691398  144.954249
2     2010-02-05   2012-01-27  2012-02-03  2012-04-27  2197.511715  2189.320166  4409.023648  189.685843
3     2010-02-05   2012-04-27  2012-05-04  2012-07-27  2014.048705  2014.048705  3866.751795  214.710872
4     2010-02-05   2012-07-27  2012-08-03  2012-10-26  1865.985156  1843.294438  3739.207824  239.991155
Mean                                                   2139.376892  2081.293038  4230.168666  197.335530

10 Final Comparison

10.1 Full-Data Comparison (Phases 1, 2, 3, 4, 5, 7 + Phase 1-lite Diagnostic)

Model                             Mean WMAE     Mean MAE      Mean RMSE
ETS (Phase 5)                     1834.753547   1796.038600   3728.626262
Random Forest (Phase 4)           2057.750915   2006.732036   4130.493432
Elastic Net (Phase 3)             2132.272878   2082.729444   4195.010969
AdaBoost (Phase 7)                2139.376892   2081.293038   4230.168666
Local Linear Lite (Phase 1-lite)  2161.186467   2104.994105   4336.037305
Structural AR (Phase 2)           2187.483231   2146.618420   4189.028433
Local Linear Anomaly (Phase 1)    10075.781360  10206.203892  18921.817643

11 Conclusion

  • The full-data comparison is apples-to-apples: all models are scored on the same 154386 OOF rows with the same forward-chaining folds.
  • All tabular baselines share the same anomaly feature interface with lag1 and lag2.
  • ETS (Phase 5) achieves the lowest mean WMAE (1834.75); the full Phase 1 Local Linear is the clear outlier, and the Phase 1-lite result indicates that the extra exogenous variables drive its degradation.
  • The ranking table above is the reference for selecting the best model in this run.